64 research outputs found

    Efficient randomised broadcasting in random regular networks with applications in peer-to-peer systems

    We consider broadcasting in random d-regular graphs by using a simple modification of the random phone call model introduced by Karp et al. (Proceedings of FOCS '00, 2000). In the phone call model, in every time step, each node calls a randomly chosen neighbour to establish a communication channel to this node. The communication channels can then be used bi-directionally to transmit messages. We show that, if we allow every node to choose four distinct neighbours instead of one, then the average number of message transmissions per node required to broadcast a message efficiently decreases exponentially. Formally, we present an algorithm that has time complexity O(log n) and uses O(n log log n) transmissions per message. In contrast, we show for the standard model that every distributed algorithm in a restricted address-oblivious model that broadcasts a message in time O(log n) requires Ω(n log n / log d) message transmissions. Our algorithm efficiently handles limited communication failures, only requires rough estimates of the number of nodes, and is robust against limited changes in the size of the network. Our results have applications in peer-to-peer networks and replicated databases. A preliminary version appeared in the Proceedings of the 27th Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing (PODC 2008).
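    To make the modified model concrete, here is a minimal Python simulation sketch. The function name, the parameter defaults, and the use of networkx to sample a random regular graph are illustrative assumptions; it only simulates push-pull gossiping with several channels per node and is not the paper's algorithm, nor does it reproduce the transmission-saving mechanism behind the O(n log log n) bound.

```python
import random
import networkx as nx  # third-party; used only to build a random regular graph

def broadcast(n=1024, d=16, channels=4, seed=0):
    """Push-pull broadcasting in a random d-regular graph (illustrative).

    Each round, every node opens channels to `channels` distinct random
    neighbours (channels=1 recovers the classic random phone call model).
    Channels are bidirectional; a message crosses a channel whenever at
    least one endpoint is already informed.  Returns (rounds, transmissions)
    until all nodes are informed.
    """
    rng = random.Random(seed)
    g = nx.random_regular_graph(d, n, seed=seed)
    informed = {0}                       # node 0 starts with the message
    rounds = transmissions = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for v in g.nodes:
            for u in rng.sample(list(g.neighbors(v)), channels):
                if v in informed or u in informed:
                    transmissions += 1   # the informed endpoint sends once
                    newly.update((v, u))
        informed |= newly
    return rounds, transmissions

print("1 channel: ", broadcast(channels=1))
print("4 channels:", broadcast(channels=4))
```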

    Randomized diffusion for indivisible loads

    We present a new randomized diffusion-based algorithm for balancing indivisible tasks (tokens) on a network. Our aim is to minimize the discrepancy between the maximum and minimum load. The algorithm works as follows. Every vertex distributes its tokens as evenly as possible among its neighbors and itself. If this is not possible without splitting some tokens, the vertex redistributes its excess tokens among all its neighbors randomly (without replacement). In this paper we prove several upper bounds on the load discrepancy for general networks. These bounds depend on expansion properties of the network, namely the second largest eigenvalue, and on a novel measure which we refer to as the refined local divergence. We then apply these general bounds to obtain results for some specific networks. For constant-degree expanders and torus graphs, these yield exponential improvements on the discrepancy bounds. For hypercubes we obtain a polynomial improvement.
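    The round structure is described completely in the abstract, so a short sketch can illustrate it. The function name, the dictionary-based graph representation, and the 4-cycle example below are assumptions made for illustration; only the splitting rule follows the description above.

```python
import random
from collections import defaultdict

def diffusion_round(graph, load, rng):
    """One round of randomized diffusion for indivisible tokens.

    `graph` maps every vertex to a list of its neighbours and `load` maps
    every vertex to its current (integer) number of tokens.  Each vertex
    splits its tokens as evenly as possible among its neighbours and itself;
    the leftover tokens that cannot be split evenly are sent to distinct
    neighbours chosen uniformly at random (without replacement).
    """
    incoming = defaultdict(int)
    for v, nbrs in graph.items():
        share, excess = divmod(load[v], len(nbrs) + 1)
        incoming[v] += share                 # the vertex keeps one share itself
        for u in nbrs:
            incoming[u] += share
        for u in rng.sample(nbrs, excess):   # excess is at most deg(v)
            incoming[u] += 1
    return dict(incoming)

# Example: a 4-cycle with all 10 tokens initially on vertex 0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
load = {0: 10, 1: 0, 2: 0, 3: 0}
rng = random.Random(1)
for _ in range(20):
    load = diffusion_round(cycle, load, rng)
print(load, "discrepancy:", max(load.values()) - min(load.values()))
```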

    Threshold Load Balancing with Weighted Tasks

    We study threshold-based load balancing protocols for weighted tasks. We are given an arbitrary graph G with n nodes (resources, bins) and m > n tasks (balls). Initially the tasks are distributed arbitrarily over the n nodes. The resources have a threshold, and we are interested in the balancing time, i.e., the time it takes until the load of every resource is below the threshold. We distinguish between resource-controlled and user-controlled protocols. In resource-controlled protocols, resources with a load larger than the threshold are allowed to send tasks to neighbouring resources. In user-controlled protocols, tasks allocated to resources with a load above the threshold decide on their own whether to migrate to a neighbouring resource. For resource-controlled protocols we present results for arbitrary graphs: we relate the balancing time to the mixing time of the graph for above-average thresholds and to the hitting time for tight thresholds. Our bounds are tight and, surprisingly, they are independent of the weights of the tasks. For user-controlled migration we consider complete graphs and derive bounds for both above-average and tight thresholds.
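    The abstract does not say which tasks move or to which neighbours, so the following Python sketch only illustrates the general shape of a resource-controlled protocol: every resource above the threshold ships tasks to uniformly random neighbours until it drops back to the threshold. The function name, the migration rule, and the path example are assumptions for illustration, not the paper's protocol.

```python
import random

def resource_controlled_round(graph, tasks, threshold, rng):
    """One round of a resource-controlled threshold protocol (illustrative).

    `tasks[v]` is the list of task weights currently hosted by resource v.
    Every resource whose total load exceeds `threshold` removes tasks and
    sends each to a uniformly random neighbour until it is back at or below
    the threshold (or runs out of tasks).
    """
    moves = []
    for v, nbrs in graph.items():
        load = sum(tasks[v])
        while load > threshold and tasks[v]:
            w = tasks[v].pop()                   # migrate one task ...
            load -= w
            moves.append((rng.choice(nbrs), w))  # ... to a random neighbour
    for target, w in moves:                      # deliveries take effect together
        tasks[target].append(w)
    return tasks

# Example: a path of 4 resources, every weighted task initially on resource 0.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
tasks = {0: [3, 2, 2, 1, 1, 1], 1: [], 2: [], 3: []}
rng = random.Random(7)
for t in range(1, 51):
    tasks = resource_controlled_round(path, tasks, threshold=4, rng=rng)
    if all(sum(ws) <= 4 for ws in tasks.values()):
        break
print("loads after", t, "rounds:", {v: sum(ws) for v, ws in tasks.items()})
```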

    Self-stabilizing Balls & Bins in Batches: The Power of Leaky Bins

    A fundamental problem in distributed computing is the distribution of requests to a set of uniform servers without a centralized controller. Classically, such problems are modelled as static balls-into-bins processes, where m balls (tasks) are to be distributed to n bins (servers). In a seminal work, [Azar et al.; JoC'99] proposed the sequential strategy Greedy[d] for m = n. When thrown, a ball queries the load of d random bins and is allocated to a least loaded of these. [Azar et al.; JoC'99] showed that d=2 yields an exponential improvement compared to d=1. [Berenbrink et al.; JoC'06] extended this to m ≥ n, showing that the maximal load difference is independent of m for d=2 (in contrast to d=1). We propose a new variant of an infinite balls-into-bins process. In each round an expected number of λn new balls arrive and are distributed (in parallel) to the bins, and each non-empty bin deletes one of its balls. This setting models a set of servers processing incoming requests, where clients can query a server's current load but receive no information about parallel requests. We study the Greedy[d] distribution scheme in this setting and show a strong self-stabilizing property: for any arrival rate λ = λ(n) < 1, the system load is time-invariant. Moreover, for any (even super-exponential) round t, the maximum system load is (w.h.p.) O((1/(1-λ)) · log(n/(1-λ))) for d=1 and O(log(n/(1-λ))) for d=2. In particular, Greedy[2] has an exponentially smaller system load for high arrival rates.
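    The process itself is easy to simulate. The sketch below is one reading of the model (Binomial arrivals with mean λn, loads snapshotted at the start of each round, deletions after the batch); the function name and all parameter choices are illustrative assumptions, and the simulation of course proves nothing about the w.h.p. bounds above.

```python
import random

def simulate(n=100, lam=0.9, d=2, rounds=10_000, seed=0):
    """Batched infinite balls-into-bins process with 'leaky' bins.

    Each round, Binomial(n, lam)-many new balls arrive (expected lam*n, one
    modelling choice among several the abstract allows).  Every ball applies
    Greedy[d] against the bin loads *at the start of the round*, since the
    arrivals are parallel and see no concurrent requests.  Afterwards every
    non-empty bin deletes one ball.  Returns the maximum load after the
    final round.
    """
    rng = random.Random(seed)
    load = [0] * n
    for _ in range(rounds):
        snapshot = load[:]                                 # loads visible to this batch
        arrivals = sum(rng.random() < lam for _ in range(n))
        for _ in range(arrivals):
            choices = [rng.randrange(n) for _ in range(d)]
            target = min(choices, key=lambda i: snapshot[i])
            load[target] += 1
        load = [max(0, x - 1) for x in load]               # each non-empty bin deletes one ball
    return max(load)

for d in (1, 2):
    print(f"d={d}: maximum load after 10,000 rounds = {simulate(d=d)}")
```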

    Plurality consensus in arbitrary graphs: lessons learned from load balancing

    We consider plurality consensus in networks of n nodes. Initially, each node has one of k opinions. The nodes execute a (randomized) distributed protocol to agree on the plurality opinion (the opinion initially supported by the most nodes). In certain types of networks the nodes can be quite cheap and simple, and hence one seeks protocols that are not only time efficient but also simple and space efficient. Typically, protocols depend heavily on the employed communication mechanism, which ranges from sequential (only one pair of nodes communicates at any time) to fully parallel (all nodes communicate with all their neighbors at once) and everything in between. We propose a framework to design protocols for a multitude of communication mechanisms. We introduce protocols that solve the plurality consensus problem and are, with probability 1-o(1), both time and space efficient. Our protocols are based on an interesting relationship between plurality consensus and distributed load balancing. This relationship allows us to design protocols that generalize the state of the art for a large range of problem parameters.
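    The abstract does not spell out the reduction, so the following toy sketch only illustrates why load balancing is useful for plurality consensus: if every node holds one unit of load in the coordinate of its own opinion and the nodes run continuous diffusion load balancing, every coordinate converges to that opinion's global support, and each node can read off the plurality locally. The function name, the diffusion rate, and the ring example are assumptions; the paper's protocols are discrete and space efficient, unlike this toy.

```python
def plurality_via_balancing(graph, opinion, rounds=200):
    """Toy illustration of the load-balancing view of plurality consensus.

    Every node holds a load vector with one unit in the coordinate of its
    own opinion.  The nodes repeatedly run continuous diffusion load
    balancing on these vectors; diffusion preserves the total load per
    coordinate and converges to the average, so coordinate i at every node
    approaches (support of opinion i)/n and the per-node argmax recovers
    the plurality opinion.
    """
    opinions = sorted(set(opinion.values()))
    load = {v: [1.0 if opinion[v] == o else 0.0 for o in opinions] for v in graph}
    alpha = 1.0 / (max(len(nbrs) for nbrs in graph.values()) + 1)  # diffusion rate
    for _ in range(rounds):
        load = {v: [load[v][i] + alpha * sum(load[u][i] - load[v][i] for u in nbrs)
                    for i in range(len(opinions))]
                for v, nbrs in graph.items()}
    return {v: opinions[vec.index(max(vec))] for v, vec in load.items()}

# Example: a ring of 6 nodes with opinions a, a, a, b, b, c -- plurality is 'a'.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(plurality_via_balancing(ring, {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'c'}))
```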

    Efficient plurality consensus, or: the benefits of cleaning up from time to time

    Plurality consensus considers a network of n nodes, each having one of k opinions. Nodes execute a (randomized) distributed protocol with the goal that all nodes adopt the plurality (the opinion initially supported by the most nodes). Communication is realized via the Gossip (or random phone call) model. A major open question has been whether there is a protocol for the complete graph that converges (w.h.p.) in polylogarithmic time and uses only polylogarithmic memory per node (local memory). We answer this question affirmatively. We propose two protocols that need only mild assumptions on the bias in favor of the plurality. As an example of our results, consider the complete graph and an arbitrarily small constant multiplicative bias in favor of the plurality. Our first protocol achieves plurality consensus in O(log k · log log n) rounds using log k + Θ(log log k) bits of local memory. Our second protocol achieves plurality consensus in O(log n · log log n) rounds using only log k + 4 bits of local memory. This disproves a conjecture by Becchetti et al. (SODA'15) implying that any protocol with local memory log k + O(1) has worst-case runtime Ω(k). We provide similar bounds for much weaker bias assumptions. At the heart of our protocols lies an undecided state, an idea introduced by Angluin et al. (Distributed Computing'08).
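    The undecided state mentioned at the end is a classical building block, and a small simulation shows the basic dynamics. The function name, the pull-based contact rule, and the example parameters are illustrative assumptions; the paper's protocols add further machinery on top of this idea.

```python
import random
from collections import Counter

UNDECIDED = None

def undecided_state_round(opinions, rng):
    """One parallel round of the basic undecided-state dynamics.

    Every node contacts a uniformly random node (a pull, as in the Gossip /
    random phone call model).  A node that holds an opinion and sees a
    *different* opinion becomes undecided; an undecided node adopts the
    contacted node's opinion (if it has one); everything else stays put.
    """
    n = len(opinions)
    new = opinions[:]
    for u in range(n):
        v = rng.randrange(n)
        mine, theirs = opinions[u], opinions[v]
        if mine is UNDECIDED and theirs is not UNDECIDED:
            new[u] = theirs        # undecided: adopt whatever you see
        elif mine is not UNDECIDED and theirs is not UNDECIDED and mine != theirs:
            new[u] = UNDECIDED     # conflicting opinions: step back
    return new

# Example: n = 1000 nodes and k = 2 opinions with a small constant bias.
rng = random.Random(3)
opinions = [0] * 550 + [1] * 450
rng.shuffle(opinions)
for t in range(1, 201):
    opinions = undecided_state_round(opinions, rng)
    counts = Counter(o for o in opinions if o is not UNDECIDED)
    if len(counts) == 1 and sum(counts.values()) == len(opinions):
        print(f"consensus on opinion {counts.most_common(1)[0][0]} after {t} rounds")
        break
```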